This is a talk given by Geoff Hinton, Dept. of Computer Science, University of Toronto, at the Redwood Center for Theoretical Neuroscience on November 27, 2007.
Neurons need to represent both the presence of a feature in the sensory input and the derivative of an error function with respect to the neural activity. I will describe a simple way in which they can represent both of these very different quantities at the same time, and I will show that this representational scheme would make it easy for real neurons to backpropagate error derivatives, so that higher-level feature detectors can fine-tune the receptive fields of lower-level ones.
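The abstract refers to backpropagating error derivatives from higher-level feature detectors down to lower-level ones. The talk's actual neural representation scheme is not spelled out here, so the following is only a minimal NumPy sketch of the standard computation the abstract alludes to: a two-layer network in which the derivative of the error with respect to each hidden activity is obtained from the layer above, and that derivative is then used to adjust the lower-level weights ("receptive fields"). All sizes, weights, and the squared-error loss are illustrative assumptions, not content from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass: input -> hidden -> output (all shapes illustrative).
x = rng.standard_normal(4)           # sensory input
W1 = rng.standard_normal((3, 4))     # lower-level receptive fields
W2 = rng.standard_normal((2, 3))     # higher-level feature detectors
t = np.array([0.0, 1.0])             # target output

h = sigmoid(W1 @ x)                  # hidden activities (feature "presence")
y = sigmoid(W2 @ h)                  # output activities

# Squared error E = 0.5 * ||y - t||^2.
dE_dy = y - t                        # error derivative at the output layer

# Backpropagate: derivative of E with respect to each hidden activity,
# computed from the layer above via the chain rule.
dE_dh = W2.T @ (dE_dy * y * (1.0 - y))

# The backpropagated derivative fine-tunes the lower-level receptive fields.
dE_dW1 = np.outer(dE_dh * h * (1.0 - h), x)
```

The key quantity for the abstract's claim is `dE_dh`: it is the error derivative with respect to a hidden neuron's activity, which the talk argues a real neuron could carry alongside its feature-presence signal.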
Note: Due to technical problems, the first few minutes of the talk were not taped, and approximately 2.5 minutes were lost at about the 3:40 mark of the video.